Using Broadcasting to Implement Distributed Shared Memory Efficiently

Authors

  • Andrew S. Tanenbaum
  • M. Frans Kaashoek
  • Henri E. Bal
Abstract

Parallel computers come in two varieties: those with shared memory and those without. The former are hard to build; the latter are hard to program. In this paper we propose a hybrid form that combines the best properties of each.
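
One common way to build such a hybrid, and the one the title points to, is a distributed shared memory layered on broadcasting: shared data are replicated on every machine, reads are served from the local replica, and writes are delivered to all replicas over a reliable, totally ordered broadcast. The sketch below simulates that idea inside a single process; the names (Node, ReplicatedWord, write/deliver) are illustrative and not taken from the paper.

    import java.util.ArrayList;
    import java.util.List;

    // A minimal, single-process simulation of write-update DSM over broadcast.
    // Each Node keeps a full replica of the shared word; a write is "broadcast"
    // by applying it, in the same order, to every replica, so later local reads
    // need no communication at all.
    public class BroadcastDsmSketch {
        static class Node {
            int replica;                                  // local copy of the shared word
            int read() { return replica; }                // reads are purely local
            void deliver(int value) { replica = value; }  // broadcast delivery
        }

        static class ReplicatedWord {
            private final List<Node> nodes = new ArrayList<>();
            Node join() { Node n = new Node(); nodes.add(n); return n; }

            // Stand-in for a reliable, totally ordered broadcast: every replica
            // sees the same writes in the same order.
            synchronized void write(int value) {
                for (Node n : nodes) n.deliver(value);
            }
        }

        public static void main(String[] args) {
            ReplicatedWord word = new ReplicatedWord();
            Node a = word.join(), b = word.join(), c = word.join();
            word.write(42);                               // one logical write, three updates
            System.out.println(a.read() + " " + b.read() + " " + c.read()); // 42 42 42
        }
    }

The trade-off this style buys is that reads cost no communication at all, while each write costs one broadcast, which is attractive when reads dominate.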

Similar resources

A Comparison of Two Paradigms for Distributed Shared Memory

This paper compares two paradigms for Distributed Shared Memory on loosely coupled computing systems: the shared data-object model as used in Orca, a programming language specially designed for loosely coupled computing systems, and the Shared Virtual Memory model. For both paradigms two systems are described: one using only point-to-point messages, the other using broadcasting as well. The two ...
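
For the page-based side of this comparison, a Shared Virtual Memory system keeps a directory of page owners and ships whole pages on access faults, while the shared data-object model ships updates at object granularity. The fragment below is a schematic, single-process illustration of fetch-on-fault only; it ignores the coherence protocol entirely, and the names (Processor, PageDirectory) are invented for illustration.

    import java.util.HashMap;
    import java.util.Map;

    // Schematic illustration of page-based Shared Virtual Memory: a processor
    // that touches a page it does not hold "faults" and fetches the whole page
    // from the page's home copy, after which accesses are local.
    public class SvmSketch {
        static final int PAGE_SIZE = 4;

        static class Processor {
            final Map<Integer, int[]> held = new HashMap<>();       // locally held pages
        }

        static class PageDirectory {
            private final Map<Integer, int[]> home = new HashMap<>(); // master copies

            int read(Processor p, int pageNo, int offset) {
                int[] page = p.held.get(pageNo);
                if (page == null) {                                  // access fault
                    int[] master = home.computeIfAbsent(pageNo, k -> new int[PAGE_SIZE]);
                    page = master.clone();                           // ship the whole page
                    p.held.put(pageNo, page);
                }
                return page[offset];
            }

            void write(int pageNo, int offset, int value) {          // write at the home copy
                home.computeIfAbsent(pageNo, k -> new int[PAGE_SIZE])[offset] = value;
            }
        }

        public static void main(String[] args) {
            PageDirectory dir = new PageDirectory();
            Processor p = new Processor();
            dir.write(3, 1, 7);                                      // page 3 written at its home
            System.out.println(dir.read(p, 3, 1));                   // first read faults, prints 7
        }
    }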

Efficient Object-Based Software Transactions

This paper proposes an efficient object-based implementation of non-blocking software transactions. We use ideas from software distributed shared memory to efficiently implement transactions with little overhead for non-transactional code. Rather than emulating a flat transactional memory, our scheme is object-based, which allows compiler optimizations to provide better performance for long-run...
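
One standard way to realize non-blocking, object-based transactions is to have each transactional object point to an immutable version: a transaction works on a shadow copy and publishes it with a single compare-and-swap, while non-transactional reads simply follow the pointer and pay no extra cost. The sketch below shows that idea for the single-object case only; the names are invented, multi-object commit is omitted, and it is not the paper's actual scheme.

    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.UnaryOperator;

    // A deliberately tiny, single-object version of object-based optimistic
    // transactions: each TxObject points to an immutable version; a transaction
    // works on a shadow copy and publishes it with one CAS, retrying on conflict.
    public class ObjectTxSketch {
        static final class TxObject<T> {
            private final AtomicReference<T> current;
            TxObject(T initial) { current = new AtomicReference<>(initial); }

            T read() { return current.get(); }              // non-transactional read

            // Apply "update" to a shadow copy and commit atomically.
            T atomically(UnaryOperator<T> update) {
                while (true) {
                    T before = current.get();               // snapshot the version read
                    T after = update.apply(before);         // shadow copy with the change
                    if (current.compareAndSet(before, after)) return after;
                    // another transaction committed first: retry with fresh state
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            TxObject<Integer> counter = new TxObject<>(0);
            Runnable inc = () -> { for (int i = 0; i < 1000; i++) counter.atomically(v -> v + 1); };
            Thread t1 = new Thread(inc), t2 = new Thread(inc);
            t1.start(); t2.start(); t1.join(); t2.join();
            System.out.println(counter.read());             // 2000
        }
    }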

Orca: a Portable User-Level Shared Object System *

Orca is an object-based distributed shared memory system that is designed for writing portable and efficient parallel programs. Orca hides the communication substrate from the programmer by providing an abstract communication model based on shared objects. Mutual exclusion and condition synchronization are cleanly integrated in the model. Orca has been implemented using a layered system, consis...
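
The two properties named here, mutual exclusion and condition synchronization on shared objects, have a close Java analogue: an object whose operations are synchronized and may wait on a guard before proceeding. The class below is that analogue, with invented names; it only mimics the programming model, since real Orca objects are written in Orca and may be replicated by the runtime.

    // Rough Java analogue of a shared object in the Orca style: operations are
    // mutually exclusive, and an operation may block until its guard holds
    // (condition synchronization).
    public class SharedIntObject {
        private int value;

        // Mutually exclusive write; wakes up operations blocked on a guard.
        public synchronized void assign(int v) {
            value = v;
            notifyAll();
        }

        public synchronized int read() {
            return value;
        }

        // Guarded operation: block until "value >= target" holds, then proceed atomically.
        public synchronized void awaitAtLeast(int target) throws InterruptedException {
            while (value < target) wait();
        }

        public static void main(String[] args) throws InterruptedException {
            SharedIntObject o = new SharedIntObject();
            Thread waiter = new Thread(() -> {
                try { o.awaitAtLeast(10); System.out.println("guard satisfied: " + o.read()); }
                catch (InterruptedException ignored) { }
            });
            waiter.start();
            o.assign(10);          // satisfies the guard and releases the waiter
            waiter.join();
        }
    }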

Flat Combining Synchronized Global Data Structures

The implementation of scalable synchronized data structures is notoriously difficult. Recent work in shared-memory multicores introduced a new synchronization paradigm called flat combining that allows many concurrent accessors to cooperate efficiently to reduce contention on shared locks. In this work we introduce this paradigm to a domain where reducing communication is paramount: distributed...
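
Flat combining, as summarized here, has threads publish their requests into per-thread slots; whichever thread happens to acquire the lock becomes the combiner and applies every pending request, so most threads avoid contending on the lock directly. The counter below is a minimal sketch of that protocol (busy-waiting, fixed slot count, invented names), not the authors' distributed variant.

    import java.util.concurrent.atomic.AtomicIntegerArray;
    import java.util.concurrent.locks.ReentrantLock;

    // Minimal flat-combining counter: threads publish requests into per-thread
    // slots; whichever thread gets the lock becomes the combiner and applies all
    // pending requests, so most threads never touch the shared lock themselves.
    public class FlatCombiningCounter {
        private static final int SLOTS = 8;              // one slot per worker thread
        private final AtomicIntegerArray pending = new AtomicIntegerArray(SLOTS);
        private final ReentrantLock lock = new ReentrantLock();
        private volatile long total = 0;

        // Each thread passes its own slot index (0..SLOTS-1).
        public void add(int slot, int amount) {
            pending.addAndGet(slot, amount);             // publish the request
            while (pending.get(slot) != 0) {             // wait until some combiner applies it
                if (lock.tryLock()) {                    // become the combiner
                    try {
                        for (int i = 0; i < SLOTS; i++) {
                            total += pending.getAndSet(i, 0);   // drain every slot
                        }
                    } finally {
                        lock.unlock();
                    }
                }
            }
        }

        public long get() { return total; }

        public static void main(String[] args) throws InterruptedException {
            FlatCombiningCounter c = new FlatCombiningCounter();
            Thread[] ts = new Thread[SLOTS];
            for (int t = 0; t < SLOTS; t++) {
                final int slot = t;
                ts[t] = new Thread(() -> { for (int i = 0; i < 1000; i++) c.add(slot, 1); });
                ts[t].start();
            }
            for (Thread t : ts) t.join();
            System.out.println(c.get());                 // 8000
        }
    }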

Fsc: a Sisal Compiler for Both Distributed- and Shared-Memory Machines

This paper describes a prototype Sisal compiler that supports distributed- as well as shared-memory machines. The compiler, fsc, modifies the code-generation phase of the optimizing Sisal compiler, osc, to use the Filaments library as a run-time system. Filaments efficiently supports fine-grain parallelism and a shared-memory programming model. Using fine-grain threads makes it possible to implement re...
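
Fine-grain parallelism in the Filaments style means splitting work into many tiny threads that all share one address space and letting the runtime multiplex them onto a few processors. The Java fork/join example below only illustrates that general style on a recursive array sum; it is an analogy, not the Filaments library or the fsc compiler's output.

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Illustration of the fine-grain thread style: the work is split into many
    // small tasks that all read the same shared array, and the runtime multiplexes
    // them onto a few OS threads, much as a fine-grain threads package would.
    public class FineGrainSum extends RecursiveTask<Long> {
        private static final int GRAIN = 1_000;      // task size: deliberately small
        private final long[] data;
        private final int lo, hi;

        FineGrainSum(long[] data, int lo, int hi) { this.data = data; this.lo = lo; this.hi = hi; }

        @Override protected Long compute() {
            if (hi - lo <= GRAIN) {                  // small enough: sum directly
                long s = 0;
                for (int i = lo; i < hi; i++) s += data[i];
                return s;
            }
            int mid = (lo + hi) >>> 1;               // otherwise fork two subtasks
            FineGrainSum left = new FineGrainSum(data, lo, mid);
            FineGrainSum right = new FineGrainSum(data, mid, hi);
            left.fork();
            return right.compute() + left.join();
        }

        public static void main(String[] args) {
            long[] data = new long[1_000_000];
            for (int i = 0; i < data.length; i++) data[i] = 1;
            long sum = ForkJoinPool.commonPool().invoke(new FineGrainSum(data, 0, data.length));
            System.out.println(sum);                 // 1000000
        }
    }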


Journal:

Volume   Issue

Pages  -

Publication date: 1994